Lecture 35

Iterated Function Systems (cont'd)

Application of Banach's Fixed Point Theorem to IFS (conclusion)

In the previous lecture, we focussed on the idea of the parallel operator f̂ associated with an N-map iterated function system as a mapping from sets to sets in R^n. It was therefore necessary to discuss the idea of a distance function, or metric, between sets in R^n. Such a metric is the Hausdorff metric. We ended the lecture by simply stating the main result, which we repeat below.

Theorem: Let f be an N-map IFS composed of contraction mappings f_k : D → D, 1 ≤ k ≤ N, where D ⊂ R^n. Let c_k ∈ [0,1), 1 ≤ k ≤ N, denote the contractivity factors of the IFS maps f_k, and define c = max_{1≤k≤N} c_k. (Note that 0 < c < 1.) Now let H(D) denote an appropriate space of subsets of D which is a complete metric space with respect to the Hausdorff metric h. (Details are provided in the Appendix to this lecture.) Let f̂ be the parallel operator associated with this IFS, defined as follows: for any set S ∈ H(D),

    f̂(S) = ⋃_{k=1}^{N} f̂_k(S).    (1)

Then for any two sets S_1, S_2 ∈ H(D),

    h(f̂(S_1), f̂(S_2)) ≤ c h(S_1, S_2).    (2)

In other words, the mapping f̂ : H(D) → H(D) is contractive (with respect to the Hausdorff metric).

Corollary: From Banach's Fixed Point Theorem, there exists a unique set A ∈ H(D) which is the fixed point of f̂, i.e.,

    A = f̂(A) = ⋃_{k=1}^{N} f̂_k(A).    (3)

The set A is the attractor of the IFS. Furthermore, it is self-similar in that it may be expressed as a union of contracted copies of itself.

The Appendix at the end of the notes for this lecture contains copies of handwritten notes by the instructor (from a course taught by him years ago) in which a proof of the above Theorem is presented. A number of results must be proved along the way, however. These, as well as the final proof,

are rather detailed and technical, and are presented for purposes of information. They are considered to be supplementary to the course.

Using IFS attractors to approximate sets, including natural objects

Note: Much of the following section was taken from the instructor's article, "A Hitchhiker's Guide to Fractal-Based Function Approximation and Image Compression," a slightly expanded version of two articles which appeared in the February and August 1995 issues of the UW Faculty of Mathematics alumni newspaper, Math Ties. It may be downloaded from the instructor's webpage.

As motivation for this section, we revisit the spleenwort fern attractor, shown below, due to Prof. Michael Barnsley and presented earlier in the course (Week 12, Lecture 33).

Spleenwort fern: the attractor of a four-map IFS in R^2.

As mentioned later in that lecture, with the creation of these fern-type attractors in 1984 came the idea of using IFS to approximate other shapes and figures occurring in nature and, ultimately, images in general. The IFS was seen to be a possible method of data compression. A high-resolution picture of a shaded fern normally requires on the order of one megabyte of computer memory for storage. Current compression methods might be able to cut this number by a factor of ten or so. However,

as an attractor of a four-map IFS with probabilities, this fern may be described completely in terms of only 28 IFS parameters! This is a staggering amount of data compression. Not only are the storage requirements reduced, but one can also send this small amount of data quickly over communications lines to others, who could then decompress it and reconstruct the fern by simply iterating the IFS parallel operator f̂.

However, not all objects in nature (in fact, very few) exhibit the special self-similarity of the spleenwort fern. Nevertheless, as a starting point there remains the interesting general problem of determining how well sets and images can be approximated by the attractors of IFS. We pose the so-called inverse problem for geometric approximation with IFS as follows:

Given a target set S, can one find an IFS f = {f_1, f_2, ..., f_N} whose attractor A approximates S to some desired degree of accuracy in an appropriate metric (for example, the Hausdorff metric h)?

At first, this appears to be a rather formidable problem. How does one start? By selecting an initial set of maps {f_1, f_2, ..., f_N}, iterating the associated parallel operator f̂ to produce its attractor A, and then comparing it to the target set S? And then perhaps altering some or all of the maps in some way, looking at the effects of the changes on the resulting attractors, hopefully zeroing in on some final IFS?

If we step back a little, we can come up with a strategy. In fact, it won't appear that strange after we outline it, since you are already accustomed to looking at the self-similarity of IFS attractors, e.g., the Sierpinski triangle, in this way.

Here is the strategy. Given a target set S, we are looking for the attractor A of an N-map IFS f which approximates it well, i.e.,

    S ≈ A.    (4)

By "≈", we mean that S and A are close; for the moment, "visually close" will be sufficient. Now recall that A is the attractor of the IFS f, so that

    A = ⋃_{k=1}^{N} f̂_k(A).    (5)

Substitution into Eq. (4) yields

    S ≈ ⋃_{k=1}^{N} f̂_k(A).    (6)

But we now use Eq. (4) to replace A on the RHS and arrive at the final result,

    S ≈ ⋃_{k=1}^{N} f̂_k(S).    (7)

In other words, in order to find an IFS with attractor A which approximates S, we look for an IFS, i.e., a set of maps f = {f_1, f_2, ..., f_N}, which, under the parallel action of the IFS operator f̂, maps the target set S as close as possible to itself. In this way, we are expressing the target set S as closely as possible as a union of contracted copies of itself.

This idea should not seem that strange. After all, if the set S is self-similar, e.g., the attractor of an IFS, then the approximation in Eq. (7) becomes an equality.

The basic idea is illustrated in the figure below. At the left, a leaf enclosed by a solid curve is viewed as an approximate union of four contracted copies of itself. Each smaller copy is obtained by an appropriate contractive IFS map f_i. If we restrict ourselves to affine IFS maps in the plane, i.e., f_i(x) = A_i x + b_i, then the coefficients of each matrix A_i and associated column vector b_i (a total of six unknown coefficients) can be obtained from a knowledge of where three points of the original leaf S are mapped in the contracted copy f̂_i(S). We then expect that the attractor A of the resulting IFS f lies close to the target leaf S. The attractor A of the IFS is shown on the right.

In general, the determination of optimal IFS maps by looking for approximate geometric self-similarities in a set is a very difficult problem with no simple solutions, especially if one wishes to automate the process. Fortunately, we can proceed by another route by realizing that there is much more to a picture than just geometric shapes. There is also shading. For example, a real fern has veins which may be darker than the outer extremities of the fronds.
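The six-coefficient determination described above (three points of S and their images) amounts to solving a small linear system: each point correspondence supplies two linear equations in the six unknowns. A minimal Python/NumPy sketch, with an illustrative map rather than the leaf example (the function name and test points are not from the notes):

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve f(x) = A x + b in R^2 from three point correspondences.

    src, dst: arrays of shape (3, 2). Each correspondence (x, y) -> (u, v)
    gives two linear equations in the six unknowns a11, a12, a21, a22, b1, b2.
    """
    M, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        M.append([x, y, 0, 0, 1, 0]); rhs.append(u)   # u = a11 x + a12 y + b1
        M.append([0, 0, x, y, 0, 1]); rhs.append(v)   # v = a21 x + a22 y + b2
    a11, a12, a21, a22, b1, b2 = np.linalg.solve(np.array(M, float),
                                                 np.array(rhs, float))
    return np.array([[a11, a12], [a21, a22]]), np.array([b1, b2])

# Illustrative check: recover a Sierpinski-type map f(x) = x/2 + (1/2, 0)
# from three non-collinear points and their images.
src = np.array([[0, 0], [1, 0], [0, 1]], float)
A_true, b_true = 0.5 * np.eye(2), np.array([0.5, 0.0])
dst = src @ A_true.T + b_true
A, b = affine_from_points(src, dst)
print(np.allclose(A, A_true), np.allclose(b, b_true))
```

The three source points must be non-collinear; otherwise the 6-by-6 system is singular.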
Thus it is more natural to think of a picture as defining a function: at each point or pixel (x,y) in a photograph or a computer display (represented, for convenience, by the region X = [0,1]^2) there is an associated grey level u(x,y), which may assume a finite nonnegative value. (In practical applications, i.e., digitized images, each pixel can assume only one of a finite number of discrete values.) In the next section, we show one way in which shading can be produced on IFS attractors. It won't, however, be the ideal method of performing image approximation. A better method for images will involve a "collaging" of the graphs of functions, leading to an effective method of approximating and compressing images. This will be

Left: Approximating a leaf as a "collage," i.e., a union of contracted copies of itself. Right: The attractor A of the four-map IFS obtained from the collage procedure on the left.

discussed in the next lecture.

Iterated Function Systems with Probabilities

Let us recall our definition of an iterated function system from the past couple of lectures:

Definition (Iterated function system (IFS)): Let f = {f_1, f_2, ..., f_N} denote a set of N contraction mappings on a closed and bounded subset D ⊂ R^n, i.e., for each k ∈ {1, 2, ..., N}, f_k : D → D and there exists a constant 0 ≤ C_k < 1 such that

    d(f_k(x), f_k(y)) ≤ C_k d(x, y)   for all x, y ∈ D.    (8)

Associated with this set of contraction mappings is the parallel set-valued mapping f̂, defined as follows: for any subset S ⊆ D,

    f̂(S) = ⋃_{k=1}^{N} f̂_k(S),    (9)

where the f̂_k denote the set-valued mappings associated with the mappings f_k. The set of maps f with parallel operator f̂ defines an N-map iterated function system on the set D ⊂ R^n.
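The parallel operator of Eq. (9), and the contractivity estimate of Eq. (2) from earlier in the lecture, can be checked numerically on finite point sets (which are compact, hence legitimate elements of H(D)). A minimal sketch, assuming Python/NumPy, an illustrative two-map IFS on [0,1], and a brute-force computation of the Hausdorff distance:

```python
import numpy as np

def hausdorff(S1, S2):
    """Hausdorff distance h(S1, S2) between two finite point sets in R^n."""
    D = np.linalg.norm(S1[:, None, :] - S2[None, :, :], axis=2)
    # max over points of the distance to the nearest point of the other set
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# An illustrative 2-map IFS on [0,1]: f1(x) = x/2, f2(x) = x/2 + 1/2,
# both with contractivity factor 1/2, so c = 1/2 in Eq. (2).
maps = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]
c = 0.5

def parallel(S):
    """Parallel operator, Eq. (9): union of the set-valued maps applied to S."""
    return np.concatenate([f(S) for f in maps])

S1 = np.linspace(0.0, 1.0, 50)[:, None]                  # finite "sets" in H([0,1])
S2 = np.random.default_rng(0).uniform(0, 1, 50)[:, None]

lhs = hausdorff(parallel(S1), parallel(S2))
rhs = c * hausdorff(S1, S2)
print(lhs <= rhs + 1e-12)   # the contractivity estimate of Eq. (2)
```

The brute-force distance matrix costs O(|S1|·|S2|), which is fine at this scale.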

Also recall the main result regarding N-map iterated function systems as defined above.

Theorem: There exists a unique set A ⊆ D which is the fixed point of the parallel IFS operator f̂, i.e.,

    A = f̂(A) = ⋃_{k=1}^{N} f̂_k(A).    (10)

Consequently, the set A is self-similar, i.e., A is the union of N geometrically-contracted copies of itself.

We're now going to return to an idea that was used in Problem Set No. 5 to introduce you to IFS, namely the association of a set of probabilities p_i with the IFS maps f_i, as defined below.

Definition (Iterated function system with probabilities (IFSP)): Let f = {f_1, f_2, ..., f_N} denote a set of N contraction mappings on a closed and bounded subset D ⊂ R^n, i.e., for each k ∈ {1, 2, ..., N}, f_k : D → D and there exists a constant 0 ≤ C_k < 1 such that

    d(f_k(x), f_k(y)) ≤ C_k d(x, y)   for all x, y ∈ D.    (11)

Associated with each map f_k is a probability p_k ∈ [0,1] such that

    ∑_{k=1}^{N} p_k = 1.    (12)

Then the set of maps f with associated probabilities p = (p_1, p_2, ..., p_N) is known as an N-map iterated function system with probabilities on the set D ⊂ R^n and will be denoted as (f, p).

As before, the IFS part of an IFSP, i.e., the maps f_k, 1 ≤ k ≤ N, will determine an attractor A that satisfies the self-similarity property in Eq. (10). But what about the probabilities p_k? What role do they play? The answer is that they will uniquely determine a measure that is defined on the attractor A. It is beyond the scope of this course to discuss measures, which are intimately related to the theory of integration, in any detail. (As such, you are referred to a course such as PMATH 451, "Measure and Integration.") Here we simply mention how these measures can be visualized with the help of the random iteration algorithm that you examined in Problem Set No. 5. We actually have encountered

measures earlier in this course, although they weren't mentioned explicitly, when we examined the distribution of iterates of a chaotic dynamical system. And this is how we are going to discuss them in relation to iterated function systems with probabilities: they will determine the distribution of iterates produced by the random iteration algorithm, also known as the "Chaos Game."

Example No. 1: To illustrate, we consider the following two-map IFS on the interval [0,1]:

    f_1(x) = (1/2)x,   f_2(x) = (1/2)x + 1/2.    (13)

It should not be difficult to see that the attractor of this IFS is the interval X = [0,1], since

    f̂_1 : [0,1] → [0, 1/2],   f̂_2 : [0,1] → [1/2, 1],    (14)

so that

    f̂([0,1]) = f̂_1([0,1]) ∪ f̂_2([0,1]) = [0, 1/2] ∪ [1/2, 1] = [0,1].    (15)

We now let p_1 and p_2 be probabilities associated with the IFS maps f_1 and f_2, respectively, such that

    p_1 + p_2 = 1.    (16)

We now consider the following random iteration algorithm involving these two maps and their probabilities: starting with an x_0 ∈ [0,1], define

    x_{n+1} = f_{σ_n}(x_n),   σ_n ∈ {1, 2},    (17)

where σ_n is chosen from the set {1, 2} with probabilities p_1 and p_2, respectively, i.e.,

    P(σ_n = 1) = p_1,   P(σ_n = 2) = p_2.    (18)

Case No. 1: Equal probabilities, i.e., p_1 = p_2 = 1/2.

At each step of the algorithm in Eq. (17) there is an equal probability of choosing map f_1 or f_2. No matter where we start, i.e., whatever x_0 is, there is a 50% probability that x_1 will be located in [0, 1/2] and a 50% probability that it will be located in [1/2, 1]. And since this is the case, there should be a

50% probability that x_2 will be located in [0, 1/2], etc.

Let us now perform the following experiment, very much like the one that we performed to analyze the distribution of iterates of chaotic dynamical systems. Once again, for an n sufficiently large, we divide the interval [0,1] into n subintervals I_k of equal length using the partition points

    x_k = k Δx,   0 ≤ k ≤ n,   where Δx = 1/n.    (19)

(In our section on chaotic dynamical systems, we used N instead of n. Unfortunately, N is now reserved for the number of maps in our IFS.) We now run the random iteration algorithm in (17) for a large number of iterations, M, counting the number of times, n_k, that each subinterval I_k is visited. We then define the numbers

    p_k = n_k / M,   1 ≤ k ≤ n,    (20)

which are once again the fractions of the total iterates {x_n}_{n=1}^{M} found in each subinterval I_k. In the figure below we present a plot of the p_k obtained using a partition of n = 1000 points and M = 10^7 iterates. The distribution of the iterates x_n appears to be quite uniform, in accordance with our earlier discussion.
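The experiment above is straightforward to reproduce. A sketch in Python/NumPy, with the grid size and iterate count scaled down from the text's n = 1000 and M = 10^7 to keep the run short:

```python
import numpy as np

# Chaos game for the two-map IFS of Eq. (13): f1(x) = x/2, f2(x) = x/2 + 1/2,
# with equal probabilities p1 = p2 = 1/2 (Case 1).
rng = np.random.default_rng(1)
maps = [lambda t: 0.5 * t, lambda t: 0.5 * t + 0.5]

n, M = 10, 10**5
sigma = rng.choice(2, size=M, p=[0.5, 0.5])   # Eq. (18): random map indices
x = rng.uniform()
counts = np.zeros(n)
for s in sigma:
    x = maps[s](x)                            # Eq. (17): x_{n+1} = f_{sigma_n}(x_n)
    counts[min(int(n * x), n - 1)] += 1       # which subinterval I_k was visited

pk = counts / M                               # Eq. (20): fraction of iterates per bin
print(pk)                                     # each entry should be near 1/n = 0.1
```

With equal probabilities the histogram comes out (approximately) flat, matching the uniform distribution described in the text.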

Case No. 2: Unequal probabilities, e.g., p_1 = 2/5, p_2 = 3/5.

At each step of the algorithm in Eq. (17) there is now a greater probability of choosing map f_2 over f_1. No matter where we start, i.e., whatever x_0 is, there is a 60% probability of finding x_1 in the interval [1/2, 1] and a 40% probability of finding x_1 in the interval [0, 1/2]. As such, we might expect the distribution of the iterates to be somewhat slanted toward the right.

But wait! If there is a higher probability of finding an iterate in the second half-interval [1/2, 1] than in the first, this is going to make itself known in each half-interval as well. For example, there should be a higher probability of finding an iterate in the subinterval [1/4, 1/2] than in the subinterval [0, 1/4], and so on.

The reader should begin to see that there is no end to this analysis: there should be correspondingly higher and lower probabilities for the one-eighth intervals, and so on. If we run the random iteration algorithm for this case, using a partition of n = 1000 points and M = 10^7 iterates, the resulting plot of the p_k fractions is a histogram approximation of the so-called invariant measure, which we'll denote as µ and which is associated with the IFS

    f_1(x) = (1/2)x,   f_2(x) = (1/2)x + 1/2,    (21)

with associated probabilities

    p_1 = 2/5,   p_2 = 3/5.    (22)
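The self-refining pattern described above can be made quantitative for this non-overlapping IFS: a dyadic subinterval is the image of [0,1) under a composition of the maps (e.g., [1/4, 1/2) = f_1(f_2([0,1)))), and the invariant measure assigns it the product of the corresponding probabilities. A sketch in Python/NumPy, under the assumption that the chaos-game histogram approximates µ well at this number of iterates:

```python
import numpy as np

# Chaos game for Case 2 (p1 = 2/5, p2 = 3/5) of the IFS f1(x) = x/2, f2(x) = x/2 + 1/2.
# Address structure of the quarter-intervals:
#   [0,1/4) = f1(f1([0,1))),  [1/4,1/2) = f1(f2([0,1))),
#   [1/2,3/4) = f2(f1([0,1))), [3/4,1) = f2(f2([0,1))),
# so the invariant measure gives them masses p1*p1, p1*p2, p2*p1, p2*p2.
rng = np.random.default_rng(2)
maps = [lambda t: 0.5 * t, lambda t: 0.5 * t + 0.5]
p1, p2 = 0.4, 0.6

M = 2 * 10**5
sigma = rng.choice(2, size=M, p=[p1, p2])
x = rng.uniform()
counts = np.zeros(4)
for s in sigma:
    x = maps[s](x)
    counts[min(int(4 * x), 3)] += 1

empirical = counts / M
predicted = np.array([p1 * p1, p1 * p2, p2 * p1, p2 * p2])  # 0.16, 0.24, 0.24, 0.36
print(empirical, predicted)
```

Refining to dyadic intervals of length 2^{-m} gives masses that are products of m probabilities, which is exactly the endlessly self-refining histogram the text describes.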

Example No. 2: We now consider the following two-map IFS on [0,1]:

    f_1(x) = (3/5)x,   f_2(x) = (3/5)x + 2/5.    (23)

The fixed point of f_1 is x̄_1 = 0, and the fixed point of f_2 is x̄_2 = 1. The attractor of this IFS is once again the interval [0,1], since

    f̂_1 : [0,1] → [0, 3/5],   f̂_2 : [0,1] → [2/5, 1],    (24)

so that

    f̂([0,1]) = f̂_1([0,1]) ∪ f̂_2([0,1]) = [0, 3/5] ∪ [2/5, 1] = [0,1].    (25)

This doesn't seem to be so interesting, but the fact that the images of [0,1] under the action of f_1 and f_2 overlap will make things interesting in terms of the underlying invariant measure.

Let us once again assume equal probabilities, i.e., p_1 = p_2 = 1/2. Given an x_0 ∈ [0,1], there is an equal probability of choosing either f_1 or f_2 to apply to x_0. This means that there is an equal opportunity of finding x_1 in [0, 3/5] and in [2/5, 1]. But notice now that these two intervals OVERLAP. This means that there are two ways for x_0 to get mapped into the interval [2/5, 3/5]. This implies that there should be a slightly greater probability of finding x_1 in this subinterval than in the rest of [0,1]. But this means that the middle parts of the smaller subintervals will be visited more often than their outer parts, and so on. If we run the random iteration algorithm for n = 1000 and M = 10^7 iterates, the following distribution is obtained.

This is a histogram approximation of the so-called invariant measure, µ, associated with the IFS

    f_1(x) = (3/5)x,   f_2(x) = (3/5)x + 2/5,    (26)

with associated probabilities

    p_1 = p_2 = 1/2.    (27)

Following the landmark papers by J. Hutchinson and M. Barnsley/S. Demko (see references at the end of the next lecture), researchers viewed invariant measures of IFSP as a way to produce shading in a set. After all, a set itself is not a good way to represent an object unless one is interested only in its shape, in which case a binary, black-and-white image is sufficient. That being said, if one is going to use IFSP measures to approximate shaded objects, then one would like to have a little more control over the shading than is possible with the invariant measure method shown above. Such methods to achieve more control essentially treat measures more like functions. As such, it is actually advantageous to devise IFS-type methods on functions, which is the subject of the next lecture.

[Pages 13 to 24: the Appendix to this lecture, consisting of scanned handwritten notes containing the proof of the Theorem; the images are not reproduced in this transcription.]

Lecture 36

Iterated function systems for functions: fractal transforms and fractal image coding

The previous lecture concluded with the comment that we should regard a picture as being more than merely geometric shapes. There is also shading. As such, it is more natural to think of a picture as defining a function: at each point or pixel (x,y) in a photograph (assumed to be black-and-white for the moment) there is an associated grey level u(x,y) which assumes a finite and nonnegative value. (Here, (x,y) ∈ X = [0,1]^2, for convenience.)

For example, consider Figure 1 below, a standard test case in image processing studies named "Boat." The image is an array of pixels, each of which assumes one of 256 shades of grey (0 = white, 255 = black). From the point of view of continuous real variables (x,y), the image is represented as a piecewise constant function u(x,y). If the grey level value of each pixel is interpreted as a value in the z direction, then the graph of the image function z = u(x,y) is a surface in R^3, as shown on the right. The red-blue spectrum of colours in the plot is used to characterize function values: higher values are more red, lower values are more blue.

Figure 1. Left: The standard test-image "Boat," a digital image at 8 bits per pixel. Right: The Boat image, viewed as a non-negative image function z = u(x,y).

Our goal is to set up an IFS-type approach to work with non-negative functions u : X → R^+ instead of sets. Before writing any mathematics, let us illustrate schematically what can be done. For ease of presentation, we consider for the moment only one-dimensional images, i.e., positive real-valued functions u(x), where x ∈ [0,1]. An example is sketched in Figure 2(a).

Suppose our IFS is composed of only two contractive maps f_1, f_2. Each of these functions f_i will map the base space X = [0,1] to a subinterval f̂_i(X) contained in X. Let's choose

    f_1(x) = 0.6x,   f_2(x) = 0.6x + 0.4.    (28)

For reasons which will become clear below, it is important that f̂_1(X) and f̂_2(X) are not disjoint; they will have to overlap with each other, even if the overlap occurs only at one point.

The first step in our IFS procedure is to make two copies of the graph of u(x) which are distorted to fit on the subsets f̂_1(X) = [0, 0.6] and f̂_2(X) = [0.4, 1] by shrinking and translating the graph in the x-direction. This is illustrated in Figure 2(b). Mathematically, the two component curves a_1(x) and a_2(x) in Figure 2(b) are given by

    a_1(x) = u(f_1^{-1}(x)),   x ∈ f̂_1(X),
    a_2(x) = u(f_2^{-1}(x)),   x ∈ f̂_2(X).    (29)

It is important to understand this equation. For example, the term f_1^{-1}(x) is defined only for those x ∈ X at which the inverse of f_1 exists. For the inverse of f_1 to exist at x means that one must be able to get to x under the action of the map f_1, i.e., there exists a y ∈ X such that f_1(y) = x. But this means that y = f_1^{-1}(x). It also means that x ∈ f̂_1(X), where

    f̂_1(X) = {f_1(y), y ∈ X}.    (30)

Furthermore, note that since the map f_1(x) is a contraction map, it follows that the function a_1(x) is a contracted copy of u(x) which is situated on the set f̂_1(X). All of the above discussion also applies to the map f_2(x).

We're not finished, however, since some additional flexibility in modifying these curves would be desirable.
Suppose that we are allowed to modify the y (or grey level) values of each component function a_i(x). For example, let us

1. multiply all values a_1(x) by 0.5 and add 0.5,

2. multiply all values a_2(x) by 0.75.

The modified component functions, denoted as b_1(x) and b_2(x), respectively, are shown in Figure 2(c). What we have just done can be written as

    b_1(x) = φ_1(a_1(x)) = φ_1(u(f_1^{-1}(x))),   x ∈ f̂_1(X),
    b_2(x) = φ_2(a_2(x)) = φ_2(u(f_2^{-1}(x))),   x ∈ f̂_2(X),    (31)

where

    φ_1(y) = 0.5y + 0.5,   φ_2(y) = 0.75y,   y ∈ R^+.    (32)

The φ_i are known as grey-level maps: they map (nonnegative) grey-level values to grey-level values.

We now use the component functions b_i in Figure 2(c) to construct a new function v(x). How do we do this? Well, there is no problem defining v(x) at values of x ∈ [0,1] which lie in only one of the two subsets f̂_i(X). For example, x_1 = 0.25 lies only in f̂_1(X). As such, we define v(x_1) = b_1(x_1) = φ_1(u(f_1^{-1}(x_1))). The same is true for x_2 = 0.75, which lies only in f̂_2(X). We define v(x_2) = b_2(x_2) = φ_2(u(f_2^{-1}(x_2))).

Now what about points that lie in both f̂_1(X) and f̂_2(X), for example x_3 = 0.5? There are two possible components that we may use to define our resulting function v(x_3), namely b_1(x_3) and b_2(x_3). How do we suitably choose or combine these values to produce a resulting function v(x) for x in this region of overlap? To make a long story short, this is a rather complicated mathematical issue and was a subject of research, in particular at Waterloo. There are many possibilities for combining these values, including (1) adding them, (2) taking the maximum, or (3) taking some weighted sum, for example, the average. In what follows, we consider the first case, i.e., we simply add the values. The resulting function v(x) is sketched in Figure 3(a).

The observant reader may now be able to guess why we demanded that the subsets f̂_1([0,1]) and f̂_2([0,1]) overlap, touching at least at one point. If they didn't, then the union f̂_1(X) ∪ f̂_2(X) would have "holes," i.e., points x ∈ [0,1] at which no component functions a_i(x), hence b_i(x), would be defined. (Remember the Cantor set?)
Since we want our IFS procedure to map functions on X to functions on X, the resulting function v(x) must be defined for all x ∈ X.
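The construction of Eqs. (28) to (32) can be sketched numerically. The following is a minimal Python/NumPy sketch, under the assumptions of a uniform grid discretization of [0,1] and linear interpolation for evaluating u at preimages; values are added on the overlap, as chosen above, and the transform is applied repeatedly (the role of repeated application is discussed below):

```python
import numpy as np

# Two-map construction of Eqs. (28)-(32): f1(x) = 0.6x, f2(x) = 0.6x + 0.4,
# grey-level maps phi1(y) = 0.5y + 0.5 and phi2(y) = 0.75y, values added on overlap.
x = np.linspace(0.0, 1.0, 1001)
f_inv = [lambda t: t / 0.6, lambda t: (t - 0.4) / 0.6]   # inverses of f1, f2
ranges = [(0.0, 0.6), (0.4, 1.0)]                         # the subintervals f_i(X)
phi = [lambda y: 0.5 * y + 0.5, lambda y: 0.75 * y]

def transform(u):
    """One application: v(x) = sum of phi_i(u(f_i^{-1}(x))) over those i
    for which the preimage exists; the overlap [0.4, 0.6] gets both terms."""
    v = np.zeros_like(x)
    for fi_inv, (lo, hi), phi_i in zip(f_inv, ranges, phi):
        mask = (x >= lo) & (x <= hi)
        v[mask] += phi_i(np.interp(fi_inv(x[mask]), x, u))  # u at the preimage
    return v

def d2(u, v):
    """Discrete analogue of an L2 distance on [0,1]."""
    return np.sqrt(np.mean((u - v) ** 2))

u = np.sin(np.pi * x)          # an arbitrary starting image
dists = []
for _ in range(500):           # repeated application
    u_next = transform(u)
    dists.append(d2(u_next, u))
    u = u_next
print(dists[0], dists[-1])     # successive differences shrink toward zero
```

Numerically, the distances between successive iterates decay geometrically, which anticipates the contractivity discussion later in this lecture.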

Figure 2(a): A sample one-dimensional image u(x) on [0,1].

Figure 2(b): The component functions a_1(x) and a_2(x) given in Eq. (29).

Figure 2(c): The modified component functions b_1(x) and b_2(x) given in Eq. (31).

The 2-map IFS f = {f_1, f_2}, f_i : X → X, along with the associated grey-level maps Φ = {φ_1, φ_2}, φ_i : R^+ → R^+, is referred to as an Iterated Function System with Grey-Level Maps (IFSM), denoted (f, Φ). What we did above was to associate with this IFSM an operator T which acts on a function u (Figure 2(a)) to produce a new function v = Tu (Figure 3(a)). Mathematically, the action of this operator may be written as follows: for any x ∈ X,

    v(x) = (Tu)(x) = ∑′_{i=1}^{N} φ_i(u(f_i^{-1}(x))).    (33)

The prime on the summation signifies that, for each x ∈ X, we sum over only those i ∈ {1, 2} for which a preimage f_i^{-1}(x) exists. (Because of the "no holes" condition, it is guaranteed that for each x ∈ X there exists at least one such i value.) For x ∈ [0, 0.4), i can only be 1. Likewise, for x ∈ (0.6, 1], i = 2. For x ∈ [0.4, 0.6], i can assume both values 1 and 2. The extension to a general N-map IFSM is straightforward.

There is nothing preventing us from applying the T operator to the function v, so let w = Tv = T(Tu). Again, we take the graph of v and shrink it to form two copies, etc. The result is shown in Figure 3(b). As T is applied repeatedly, we produce a sequence of functions which converges to a function ū in an appropriate metric space of functions, which we shall simply denote as F(X). In most applications, one employs the function space L²(X), the space of real-valued square-integrable functions on X, i.e.,

    L²(X) = { f : X → R : ‖f‖_2 = [ ∫_X |f(x)|² dx ]^{1/2} < ∞ }.    (34)

In this space, the distance between two functions u, v ∈ L²(X) is given by

    d_2(u, v) = ‖u − v‖_2 = [ ∫_X |u(x) − v(x)|² dx ]^{1/2}.    (35)

The function ū is sketched in Figure 3(c). (Because it has so many jumps, it is better viewed as a histogram plot.)

In general, under suitable conditions on the IFS maps f_i and the grey-level maps φ_i, the operator T associated with an IFSM (f, Φ) is contractive in the space F(X). Therefore, from the Banach Contraction Mapping Theorem, it possesses a unique fixed point function ū ∈ F(X).
This is precisely the case with the 2-map IFSM given above. Its attractor is sketched in Figure 3(c). Note

Figure 3(a): The resulting fractal transform function v(x) = (Tu)(x) obtained from the component functions of Figure 2(c).

Figure 3(b): The function w(x) = (T(Tu))(x) = (T²u)(x), the result of two applications of the fractal transform operator T.

Figure 3(c): The attractor function ū = Tū of the two-map IFSM given in the text.

that from the fixed point property ū = Tū and Eq. (33), the attractor ū of an N-map IFSM satisfies the equation

    ū(x) = ∑′_{i=1}^{N} φ_i(ū(f_i^{-1}(x))).    (36)

In other words, the graph of ū satisfies a kind of self-tiling property: it may be written as a sum of distorted copies of itself.

Before going on, let's consider the three-map IFSM composed of the following IFS maps and associated grey-level maps:

    f_1(x) = (1/3)x,           φ_1(y) = (1/2)y,
    f_2(x) = (1/3)x + 1/3,     φ_2(y) = 1/2,    (37)
    f_3(x) = (1/3)x + 2/3,     φ_3(y) = (1/2)y + 1/2.

Notice that f̂_1(X) = [0, 1/3] and f̂_2(X) = [1/3, 2/3] overlap only at one point, x = 1/3. Likewise, f̂_2(X) and f̂_3(X) overlap only at x = 2/3. The fixed point attractor function ū of this IFSM is sketched in Figure 4. It is known as the "Devil's staircase" function. You can see that the attractor satisfies a self-tiling property: if you shrink the graph in the x-direction onto the interval [0, 1/3] and shrink it in the y-direction by 1/2, you obtain one piece of it. The second copy, on [1/3, 2/3], is obtained by squashing the graph to produce a constant. The third copy, on [2/3, 1], is just a translation of the first copy by 2/3 in the x-direction and 1/2 in the y-direction.

(Note: The observant reader may complain that the function graphed in Figure 4 is not the fixed point of the IFSM operator T defined by the maps in Eq. (37): since x = 1/3 is a point of overlap, the value v(1/3) should be the sum 1/2 + 1/2 = 1 of the two overlapping contributions, not 1/2. In fact, this will also happen at x = 2/3, as well as at an infinity of points obtained by the action of the f_i maps on x = 1/3 and 2/3. What a mess! Well, not quite, since the function in Figure 4 and the true attractor differ only on a countable infinity of points. Therefore, the L² distance between them is zero! The two functions belong to the same equivalence class in L²([0,1]).)

Now we have an IFS-method of acting on functions. Along with a set of IFS maps f_i there is a corresponding set of grey-level maps φ_i.
Together, under suitable conditions, they determine a unique attracting fixed-point function ū, which can be generated by iterating the operator T defined in Eq. (33).

Figure 4: The Devil's staircase function, the attractor of the three-map IFSM given in Eq. (37).

As was the case with the geometrical IFS earlier, we are naturally led to the following inverse problem for function (or image) approximation:

Given a target function (or image) v, can we find an IFSM (f, Φ) whose attractor ū approximates v, i.e., ū ≈ v?    (38)

We can make this a little more mathematically precise: given a target function (or image) v and an ε > 0, can we find an IFSM (f, Φ) whose attractor ū approximates v to within ε, i.e., satisfies the inequality ‖v − ū‖ < ε? Here, ‖·‖ denotes an appropriate norm for the space of image functions considered. For the same reason as in the previous lecture, the above inverse problem may be reformulated as follows:

Given a target function u, can we find an IFSM (f, Φ) with associated operator T such that

    u ≈ Tu?    (39)

In other words, we look for a fractal transform T that maps the target image u as close as possible to itself. Once again, we can make this a little more mathematically precise:

Given a target function u and a δ > 0, can we find an IFSM (f, Φ) with associated operator T such that

    ‖u − Tu‖ < δ?    (40)

This basically asks the question, "How well can we tile the graph of u with distorted copies of itself (subject to the operations given above)?"

"Now," you might comment, "it looks like we're right back where we started. We have to examine a graph for some kind of self-tiling symmetries, involving both geometry (the f_i) as well as grey levels (the φ_i), which sounds quite difficult." The response is, "Yes, in general it is." However, it turns out that an enormous simplification is achieved if we give up the idea of trying to find the best IFS maps f_i. Instead, we choose to work with a fixed set of IFS maps f_i, 1 ≤ i ≤ N, and then find the best grey-level maps φ_i associated with the f_i.

Question: What are these "best" grey-level maps?

Answer: They are the φ_i maps which will give the best collage or tiling of the function v with contracted copies of itself using the fixed IFS maps f_i.

To illustrate, consider the target function v(x) = √x. Suppose that we work with the following two IFS maps on [0,1]: f_1(x) = (1/2)x and f_2(x) = (1/2)x + 1/2. Note that f̂_1(X) = [0, 1/2] and f̂_2(X) = [1/2, 1]. The two sets f̂_i(X) overlap only at x = 1/2. (Note: It is very convenient to work with IFS maps for which the overlapping between subsets f̂_i(X) is minimal; this is referred to as the nonoverlapping case. In fact, this is the usual practice in applications. The remainder of this discussion will be restricted to the nonoverlapping case, so you can forget all of the earlier headaches involving overlapping and combining of fractal components.)

We wish to find the best φ_i maps, i.e., those that make ‖v − Tv‖ small. Roughly speaking, we would like

    v(x) ≈ (Tv)(x),   x ∈ [0,1],    (41)

or at least for as many x ∈ [0,1] as possible.
Recall from our earlier discussion that the first step in the action of the T operator is to produce copies of v which are contracted in the x-direction onto the subsets f̂_i(X). These copies, a_i(x) = v(f_i^{-1}(x)), i = 1, 2, are shown in Figure 5(a), along with

the target v(x) for reference. The final action is to modify these functions a_i(x) to produce functions b_i(x) which are as close as possible to the pieces of the original target function v which sit on the subsets f̂_i(X). Recall that this is the role of the grey-level maps φ_i, since b_i(x) = φ_i(a_i(x)) for all x ∈ f̂_i(X). Ideally, we would like grey-level maps that give the result

    v(x) ≈ b_i(x) = φ_i(v(f_i^{-1}(x))),   x ∈ f̂_i(X).    (42)

Thus if, for all x ∈ f̂_i(X), we plot v(x) vs. v(f_i^{-1}(x)), then we have an idea of what the map φ_i should look like. Figure 5(b) shows these plots for the two subsets f̂_i(X), i = 1, 2. In this particular example, the exact forms of the grey-level maps can be derived: φ_1(t) = (1/√2)t and φ_2(t) = √((t² + 1)/2). I leave this as an exercise for the interested reader.

In general, however, the functional form of the φ_i grey-level maps will not be known. In fact, such plots will generally produce quite scattered sets of points, often with several φ(t) values for a single t value. The goal is then to find the best grey-level curves which pass through these data points. But that sounds like least squares, doesn't it? In most such fractal transform applications, only a straight-line fit of the form φ_i(t) = α_i t + β_i is assumed. For the functions in Figure 5(b), the best affine grey-level maps associated with the two IFS maps given above are

    φ_1(t) = (1/√2)t,   φ_2(t) = α_2 t + β_2,    (43)

with α_2 and β_2 determined by the least-squares fit. The attractor of this 2-map IFSM, shown in Figure 5(c), is a very good approximation to the target function v(x) = √x.

In principle, if more IFS maps f_i and associated grey-level maps φ_i are employed, albeit in a careful manner, then better accuracy should be achieved. The primary goal of IFS-based methods of image compression, however, is not necessarily to provide approximations of arbitrary accuracy, but rather to provide approximations of acceptable accuracy to the discerning eye with as few parameters as possible.
As well, it is desirable to be able to compute the IFS parameters in a reasonable amount of time.
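The scatter-plot-and-fit idea can be sketched numerically. This is a minimal sketch, assuming the target v(x) = √x and the two IFS maps w_1(x) = x/2, w_2(x) = x/2 + 1/2 of the running example; the helper name fit_grey_map is illustrative, not from the notes:

```python
import numpy as np

def v(x):
    return np.sqrt(x)

def fit_grey_map(w_inv, lo, hi, n=200):
    """Least-squares affine grey-level map phi(t) = alpha*t + beta,
    fitted to the data pairs (v(w^{-1}(x)), v(x)) for x in [lo, hi]."""
    x = np.linspace(lo, hi, n)
    t = v(w_inv(x))              # contracted copy a_i(x)
    y = v(x)                     # target values on w_i(X)
    alpha, beta = np.polyfit(t, y, 1)
    return alpha, beta

a1, b1 = fit_grey_map(lambda x: 2.0 * x, 0.0, 0.5)        # map w_1
a2, b2 = fit_grey_map(lambda x: 2.0 * x - 1.0, 0.5, 1.0)  # map w_2
# For map 1 the relation is exactly affine: alpha_1 = 1/sqrt(2), beta_1 = 0.
```

For map 2 the underlying relation is not affine, so the fit returns the best straight line through the curved scatter, exactly the situation described above.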

Figure 5(a): The target function v(x) = √x on [0,1] along with its contractions a_i(x) = v(w_i^{-1}(x)), i = 1, 2, where the two IFS maps are w_1(x) = (1/2)x, w_2(x) = (1/2)x + 1/2.

Figure 5(b): Plots of v(x) vs. a_i(x) = v(w_i^{-1}(x)) for x ∈ w_i(X), i = 1, 2. These graphs reveal the grey-level maps φ_i associated with the two-map IFSM.

Figure 5(c): The attractor of the two-map IFSM with grey-level maps given in Eq. (43).
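Working out the exercise suggested above, one can check the exact grey-level maps numerically. A sketch, assuming v(x) = √x with w_1(x) = x/2 and w_2(x) = x/2 + 1/2: it verifies that φ_1(t) = t/√2 and φ_2(t) = √((t² + 1)/2) reproduce v on each subinterval, i.e., v(x) = φ_i(v(w_i^{-1}(x))).

```python
import math

def v(x): return math.sqrt(x)
def phi1(t): return t / math.sqrt(2)
def phi2(t): return math.sqrt((t * t + 1) / 2)

# Check v(x) = phi_i(v(w_i^{-1}(x))) on sample points of each subinterval.
ok1 = all(abs(v(x) - phi1(v(2 * x))) < 1e-12
          for x in [0.1, 0.25, 0.4])        # x in w_1(X) = [0, 1/2]
ok2 = all(abs(v(x) - phi2(v(2 * x - 1))) < 1e-12
          for x in [0.6, 0.75, 0.9])        # x in w_2(X) = [1/2, 1]
```

Note that φ_1 is already affine, so the least-squares fit recovers it exactly; only φ_2 must be approximated by a straight line.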

Local IFSM

That all being said, there is still a problem with the IFS method outlined above. It works fine for the examples that were presented, but these are rather special cases: all of the examples involved monotonic functions. In such cases, it is reasonable to expect that the function can be approximated well by combinations of spatially-contracted and range-modified copies of itself. In general, however, this is not guaranteed to work. A simple example is the target function u(x) = sin πx on [0,1], the graph of which is sketched in Figure 6 below.

Figure 6: Target function u(x) = sin πx on [0,1].

Suppose that we try to approximate u(x) = sin πx with an IFS composed of the two maps,

f_1(x) = (1/2)x, f_2(x) = (1/2)x + 1/2. (44)

It certainly does not look as if one could express u(x) = sin πx with two contracted copies of itself which lie on the intervals [0,1/2] and [1/2,1]. Nevertheless, if we try it anyway, we obtain the result shown in Figure 7. The best tiling of u(x) with two copies of itself is the constant function ū(x) = 2/π, which is the mean value of u(x) over [0,1]. If we stubbornly push ahead and try to express u(x) = sin πx with four copies of itself, i.e., use the four IFS maps,

f_1(x) = (1/4)x, f_2(x) = (1/4)x + 1/4, f_3(x) = (1/4)x + 1/2, f_4(x) = (1/4)x + 3/4, (45)

then the attractor of the best four-map IFS is shown in Figure 8. It appears to be a piecewise constant function as well. Of course, we can increase the number of IFS maps to produce better and better piecewise constant approximations to the target function u(x). But we really don't need IFS to do this.
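The constant value of the best tiling can be checked: minimizing ∫₀¹ (u(x) − c)² dx over constants c gives c = ∫₀¹ u(x) dx, and for u(x) = sin πx this mean value is 2/π. A quick midpoint-rule check (an illustrative sketch, not from the original notes):

```python
import math

# Best constant tile of u(x) = sin(pi x) over [0,1] is its mean value;
# approximate the integral by a midpoint Riemann sum.
n = 100000
mean = sum(math.sin(math.pi * (k + 0.5) / n) for k in range(n)) / n
# mean is close to 2/pi ~ 0.6366
```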

Figure 7: IFSM attractor obtained by trying to approximate u(x) = sin πx on [0,1] with two copies of itself.

Figure 8: IFSM attractor obtained by trying to approximate u(x) = sin πx on [0,1] with four copies of itself.

A significant improvement, which follows a method introduced in 1989 by A. Jacquin, then a Ph.D. student of Prof. Barnsley, is to break up the function into pieces, i.e., to consider it as a collection of functions defined over subintervals of the interval X. Instead of trying to express a function as a union of spatially-contracted and range-modified copies of itself, the modified method, known as local IFS, tries to express each piece of a function as a spatially-contracted and range-modified copy of a larger piece of the function, not of the entire function. We illustrate by considering once again the target function u(x) = sin πx. It can be viewed as a union of two monotonic functions which are defined over the intervals [0,1/2] and [1/2,1]. But neither of these pieces can, in any way, be considered as a spatially-contracted copy of another monotone function extracted from u(x). As such, we consider u(x) as the union of four pieces, which are supported on the so-called range intervals,

I_1 = [0,1/4], I_2 = [1/4,1/2], I_3 = [1/2,3/4], I_4 = [3/4,1]. (46)

We now try to express each of these pieces as spatially-contracted and range-modified copies of the

two larger pieces of u(x) which are supported on the so-called domain intervals,

J_1 = [0,1/2], J_2 = [1/2,1]. (47)

In principle, we can find IFS-type contraction maps which map each of the J_k intervals to the I_l intervals. But we can skip these details and just present the final result. Figure 9 shows the attractor of the IFS that produces the best collage of u(x) = sin πx using this 2-domain-interval/4-range-interval method. It clearly provides a much better approximation than the earlier four-IFS-map method.

Figure 9: Attractor of the local IFSM which approximates u(x) = sin πx on [0,1] using four range intervals and two domain intervals.

Fractal image coding

We now outline a simple block-based fractal coding scheme for a greyscale image function, for example, the 512 × 512-pixel Boat image shown back in Figure 1(a). In what follows, let X be an n_1 × n_2 pixel array on which the image u is defined.

Let R^(n) denote a set of n × n-pixel range subblocks R_i, 1 ≤ i ≤ N_R(n), which cover X, i.e., X = ∪_i R_i.

Let D^(m) denote a set of m × m-pixel domain blocks D_j, 1 ≤ j ≤ N_D(m), where m = 2n. (The D_j are not necessarily non-overlapping, but they should cover X.)

These two partitions of the image are illustrated in Figure 10. Let w_ij : D_j → R_i denote the affine geometric transformations that map domain blocks D_j to range blocks R_i. There are 8 such contraction maps: 4 rotations, 2 diagonal flips, and the vertical and horizontal

Figure 10: Partitioning of an image into range and domain blocks.

flips, so the maps should really be indexed as w_ij^k, 1 ≤ k ≤ 8. In many cases, only the zero-rotation map is employed, so we can ignore the k index, which we shall do from here on for simplicity.

Since we are now working in the discrete domain, i.e., pixels, as opposed to continuous spatial variables (x, y), some kind of decimation is required in order to map the larger 2n × 2n-pixel domain blocks to the smaller n × n-pixel range blocks. This is usually accomplished by a decimation procedure in which non-overlapping 2 × 2 square pixel blocks of a domain block D_j are replaced with one pixel. This definition of the w_ij maps is a formal one, intended to identify the spatial contractions that are involved in the fractal coding operation.

The decimation of the domain block D_j is accompanied by a decimation of the image block u(D_j) which is supported on it, i.e., of the 2n × 2n greyscale values that are associated with the pixels in D_j. This is usually done as follows: the greyscale value assigned to the pixel replacing four pixels in a 2 × 2 square is the average of the four greyscale values over the square that has been decimated. The result is an n × n-pixel image, to be denoted as ũ(D_j), which is the decimated version of u(D_j).

For each range block R_i, 1 ≤ i ≤ N_R(n), compute the errors associated with the approximations,

u(R_i) ≈ φ_ij(u(w_ij^{-1}(R_i))) = φ_ij(ũ(D_j)), for all 1 ≤ j ≤ N_D(m), (48)

where, for simplicity, we use affine greyscale transformations,

φ(t) = αt + β. (49)

The approximation is illustrated in Figure 11. In each such case, one is essentially determining the best straight-line fit through n² data points (x_k, y_k) ∈ R², where the x_k are the greyscale values in image block ũ(D_j) and the y_k are the

Figure 11. Left: Range block R_i and associated domain block D_j. Right: Greyscale mapping φ from u(D_j) to u(R_i).

corresponding greyscale values in image block u(R_i). (Remember that you may have to take account of rotations or inversions involved in the mapping w_ij of D_j to R_i.) This can be done by the method of least squares, i.e., by finding the α and β which minimize the total squared error,

Δ²(α, β) = Σ_{k=1}^{n²} (y_k − α x_k − β)². (50)

As is well known, minimization of Δ² yields a system of linear equations in the unknowns α and β.

Now let Δ_ij, 1 ≤ j ≤ N_D(m), denote the approximation errors associated with the approximations to u(R_i) in Eq. (48). Choose the domain block j(i) that yields the lowest approximation error.

The result of the above procedure: you have fractally encoded the image u. The following set of parameters, for all range blocks R_i, 1 ≤ i ≤ N_R(n),

j(i), index of best domain block,
α_i, β_i, affine greyscale map parameters, (51)

comprises the fractal code of the image function u. The fractal code defines a fractal transform T. The fixed point ū of T is an approximation to the image u, i.e.,

u ≈ ū = Tū. (52)

This is happening for the same reason as for our IFSM function approximation methods outlined in the previous section. Minimization of the approximation errors in Eq. (48) actually minimizes the tiling error

‖u − Tu‖, (53)

originally presented in Eq. (40). We have found a fractal transform operator T that maps the image u, in pieces, i.e., in blocks, close to itself.

Moral of the story: you store the fractal code of u and generate its approximation ū by iterating T, as shown in the next example.

Figure 12 shows the results of the above block-based IFSM procedure as applied to the Boat image. 8 × 8-pixel blocks were used for the range blocks R_i and 16 × 16-pixel blocks for the domain blocks D_j. As such, there are 4096 range blocks and 1024 domain blocks. The bottom left image of Figure 12 is the fixed-point attractor ū of the fractal transform defined by the fractal code obtained in this procedure.

You may still be asking the question, How do we iterate the fractal transform T to obtain its fixed-point attractor? Very briefly, we start with a seed image u_0, which could be the zero image, i.e., an image for which the greyscale value at all pixels is zero. You then apply the fractal operator T to u_0 to obtain a new image u_1, and then continue with the iteration procedure,

u_{n+1} = T u_n, n ≥ 0. (54)

After a sufficient number of iterations (around 10-15 for 8 × 8 range blocks), the above iteration procedure will have converged.

But perhaps we haven't answered the question completely. At each stage of the iteration procedure, i.e., at step n, when you wish to obtain u_{n+1} from u_n, you must work with each of its range blocks R_i separately. One replaces the image block u_n(R_i) supported on R_i with a suitably modified version of the image u_n(D_j(i)) on the domain block D_j(i), as dictated by the fractal code. The image block u_n(D_j(i)) must first be decimated. (This can be done once at the start of each iteration step, so you don't have to decimate repeatedly.) It is also important to make a copy of u_n so that you don't modify the original while you are constructing u_{n+1}. (Remember that the fractal code was determined by approximating the image with parts of itself!)
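The encoding and decoding steps described above can be sketched end to end. This is a minimal toy implementation under stated assumptions: non-overlapping 2n × 2n domain blocks, only the zero-rotation map (no flips or rotations), and greyscale images stored as arrays; all names (decimate, fit_grey_map, encode, apply_T, decode) are illustrative, not from the original notes.

```python
import numpy as np

def decimate(block):
    """Average non-overlapping 2x2 pixel squares: 2n x 2n -> n x n."""
    return (block[0::2, 0::2] + block[0::2, 1::2]
            + block[1::2, 0::2] + block[1::2, 1::2]) / 4.0

def fit_grey_map(x, y):
    """alpha, beta minimizing sum_k (y_k - alpha*x_k - beta)^2."""
    A = np.column_stack([x, np.ones_like(x)])
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha, beta

def encode(u, n):
    """Fractal code: for each n x n range block, the index j(i) of the
    best decimated 2n x 2n domain block plus grey map parameters."""
    H, W = u.shape
    dec = [decimate(u[r:r + 2*n, c:c + 2*n])
           for r in range(0, H, 2*n) for c in range(0, W, 2*n)]
    code = []
    for r in range(0, H, n):
        for c in range(0, W, n):
            y = u[r:r + n, c:c + n].ravel()
            best = None
            for j, Dj in enumerate(dec):
                x = Dj.ravel()
                alpha, beta = fit_grey_map(x, y)
                err = np.sum((y - alpha * x - beta) ** 2)
                if best is None or err < best[0]:
                    best = (err, j, alpha, beta)
            code.append(best[1:])
    return code

def apply_T(u, code, n):
    """One sweep of the fractal transform T defined by the code."""
    H, W = u.shape
    dec = [decimate(u[r:r + 2*n, c:c + 2*n])     # decimate once per sweep
           for r in range(0, H, 2*n) for c in range(0, W, 2*n)]
    out = np.empty_like(u)                       # write into a copy, not u
    i = 0
    for r in range(0, H, n):
        for c in range(0, W, n):
            j, alpha, beta = code[i]
            out[r:r + n, c:c + n] = alpha * dec[j] + beta
            i += 1
    return out

def decode(code, shape, n, iters=15):
    """Iterate u_{n+1} = T u_n starting from the zero seed image."""
    u = np.zeros(shape)
    for _ in range(iters):
        u = apply_T(u, code, n)
    return u
```

Provided the fitted α values keep T contractive, iterating apply_T converges to the fixed point ū of Eq. (54); on smooth test images the collage error, and hence the decoding error, is small.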
There are still a number of other questions and points that could be discussed. For example, better approximations to an image can be obtained by using smaller range blocks R_i, say 4 × 4-pixel

Figure 12. Clockwise, starting from top left: Original Boat image; the iterates u_1 and u_2; and the fixed-point approximation ū obtained by iteration of the fractal transform operator (u_0 = 0). 8 × 8-pixel range blocks, 16 × 16-pixel domain blocks.

blocks. But that means smaller domain blocks D_j, i.e., 8 × 8 blocks, which means greater searching to find an optimal domain block for each range block. The searching of the domain pool for optimal blocks is already a disadvantage of the fractal coding method. That being said, various methods have been investigated and developed to speed up the coding time by reducing the size of the domain pool. This will generally produce less-than-optimal approximations, but in many cases the loss in fidelity is almost unnoticeable.

Some references (these are old!)

Original research papers:

J. Hutchinson, Fractals and self-similarity, Indiana Univ. Math. J. 30 (1981).

M.F. Barnsley and S. Demko, Iterated function systems and the global construction of fractals, Proc. Roy. Soc. London A399 (1985).

A. Jacquin, Image coding based on a fractal theory of iterated contractive image transformations, IEEE Trans. Image Proc. (1992).

Books:

M.F. Barnsley, Fractals Everywhere, Academic Press, New York (1988).

M.F. Barnsley and L.P. Hurd, Fractal Image Compression, A.K. Peters, Wellesley, Mass. (1993).

Y. Fisher, Fractal Image Compression: Theory and Application, Springer-Verlag (1995).

N. Lu, Fractal Imaging, Academic Press (1997).

Expository papers:

M.F. Barnsley and A. Sloan, A better way to compress images, BYTE Magazine, January issue (1988).

Y. Fisher, A discussion of fractal image compression, in Chaos and Fractals: New Frontiers of Science, H.-O. Peitgen, H. Jürgens and D. Saupe, Springer-Verlag (1994).


More information

Lecture 25: Bezier Subdivision. And he took unto him all these, and divided them in the midst, and laid each piece one against another: Genesis 15:10

Lecture 25: Bezier Subdivision. And he took unto him all these, and divided them in the midst, and laid each piece one against another: Genesis 15:10 Lecture 25: Bezier Subdivision And he took unto him all these, and divided them in the midst, and laid each piece one against another: Genesis 15:10 1. Divide and Conquer If we are going to build useful

More information

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into 2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel

More information

This blog addresses the question: how do we determine the intersection of two circles in the Cartesian plane?

This blog addresses the question: how do we determine the intersection of two circles in the Cartesian plane? Intersecting Circles This blog addresses the question: how do we determine the intersection of two circles in the Cartesian plane? This is a problem that a programmer might have to solve, for example,

More information

Points covered an odd number of times by translates

Points covered an odd number of times by translates Points covered an odd number of times by translates Rom Pinchasi August 5, 0 Abstract Let T be a fixed triangle and consider an odd number of translated copies of T in the plane. We show that the set of

More information

Here are some of the more basic curves that we ll need to know how to do as well as limits on the parameter if they are required.

Here are some of the more basic curves that we ll need to know how to do as well as limits on the parameter if they are required. 1 of 10 23/07/2016 05:15 Paul's Online Math Notes Calculus III (Notes) / Line Integrals / Line Integrals - Part I Problems] [Notes] [Practice Problems] [Assignment Calculus III - Notes Line Integrals Part

More information

A TECHNOLOGY-ENHANCED FRACTAL/CHAOS COURSE. Taeil Yi University of Texas at Brownsville 80 Fort Brown Brownsville, TX

A TECHNOLOGY-ENHANCED FRACTAL/CHAOS COURSE. Taeil Yi University of Texas at Brownsville 80 Fort Brown Brownsville, TX A TECHNOLOGY-ENHANCED FRACTAL/CHAOS COURSE Taeil Yi University of Texas at Brownsville 80 Fort Brown Brownsville, TX 78520 tyi@utb.edu Abstract Easy construction of fractal figures is the most valuable

More information

Motion. 1 Introduction. 2 Optical Flow. Sohaib A Khan. 2.1 Brightness Constancy Equation

Motion. 1 Introduction. 2 Optical Flow. Sohaib A Khan. 2.1 Brightness Constancy Equation Motion Sohaib A Khan 1 Introduction So far, we have dealing with single images of a static scene taken by a fixed camera. Here we will deal with sequence of images taken at different time intervals. Motion

More information

Chapel Hill Math Circle: Symmetry and Fractals

Chapel Hill Math Circle: Symmetry and Fractals Chapel Hill Math Circle: Symmetry and Fractals 10/7/17 1 Introduction This worksheet will explore symmetry. To mathematicians, a symmetry of an object is, roughly speaking, a transformation that does not

More information

Point-Set Topology 1. TOPOLOGICAL SPACES AND CONTINUOUS FUNCTIONS

Point-Set Topology 1. TOPOLOGICAL SPACES AND CONTINUOUS FUNCTIONS Point-Set Topology 1. TOPOLOGICAL SPACES AND CONTINUOUS FUNCTIONS Definition 1.1. Let X be a set and T a subset of the power set P(X) of X. Then T is a topology on X if and only if all of the following

More information

A GRAPH FROM THE VIEWPOINT OF ALGEBRAIC TOPOLOGY

A GRAPH FROM THE VIEWPOINT OF ALGEBRAIC TOPOLOGY A GRAPH FROM THE VIEWPOINT OF ALGEBRAIC TOPOLOGY KARL L. STRATOS Abstract. The conventional method of describing a graph as a pair (V, E), where V and E repectively denote the sets of vertices and edges,

More information

Fractal Image Coding (IFS) Nimrod Peleg Update: Mar. 2008

Fractal Image Coding (IFS) Nimrod Peleg Update: Mar. 2008 Fractal Image Coding (IFS) Nimrod Peleg Update: Mar. 2008 What is a fractal? A fractal is a geometric figure, often characterized as being self-similar : irregular, fractured, fragmented, or loosely connected

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

A TESSELLATION FOR ALGEBRAIC SURFACES IN CP 3

A TESSELLATION FOR ALGEBRAIC SURFACES IN CP 3 A TESSELLATION FOR ALGEBRAIC SURFACES IN CP 3 ANDREW J. HANSON AND JI-PING SHA In this paper we present a systematic and explicit algorithm for tessellating the algebraic surfaces (real 4-manifolds) F

More information

Mathematics 350 Section 6.3 Introduction to Fractals

Mathematics 350 Section 6.3 Introduction to Fractals Mathematics 350 Section 6.3 Introduction to Fractals A fractal is generally "a rough or fragmented geometric shape that is self-similar, which means it can be split into parts, each of which is (at least

More information

A point is pictured by a dot. While a dot must have some size, the point it represents has no size. Points are named by capital letters..

A point is pictured by a dot. While a dot must have some size, the point it represents has no size. Points are named by capital letters.. Chapter 1 Points, Lines & Planes s we begin any new topic, we have to familiarize ourselves with the language and notation to be successful. My guess that you might already be pretty familiar with many

More information

Discrete Dynamical Systems: A Pathway for Students to Become Enchanted with Mathematics

Discrete Dynamical Systems: A Pathway for Students to Become Enchanted with Mathematics Discrete Dynamical Systems: A Pathway for Students to Become Enchanted with Mathematics Robert L. Devaney, Professor Department of Mathematics Boston University Boston, MA 02215 USA bob@bu.edu Abstract.

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

Coarse-to-fine image registration

Coarse-to-fine image registration Today we will look at a few important topics in scale space in computer vision, in particular, coarseto-fine approaches, and the SIFT feature descriptor. I will present only the main ideas here to give

More information

Assignment 8; Due Friday, March 10

Assignment 8; Due Friday, March 10 Assignment 8; Due Friday, March 10 The previous two exercise sets covered lots of material. We ll end the course with two short assignments. This one asks you to visualize an important family of three

More information

Naming Angles. One complete rotation measures 360º. Half a rotation would then measure 180º. A quarter rotation would measure 90º.

Naming Angles. One complete rotation measures 360º. Half a rotation would then measure 180º. A quarter rotation would measure 90º. Naming Angles What s the secret for doing well in geometry? Knowing all the angles. An angle can be seen as a rotation of a line about a fixed point. In other words, if I were mark a point on a paper,

More information

y= sin( x) y= cos( x)

y= sin( x) y= cos( x) . The graphs of sin(x) and cos(x). Now I am going to define the two basic trig functions: sin(x) and cos(x). Study the diagram at the right. The circle has radius. The arm OP starts at the positive horizontal

More information

1. Lecture notes on bipartite matching

1. Lecture notes on bipartite matching Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans February 5, 2017 1. Lecture notes on bipartite matching Matching problems are among the fundamental problems in

More information

Linear Algebra Part I - Linear Spaces

Linear Algebra Part I - Linear Spaces Linear Algebra Part I - Linear Spaces Simon Julier Department of Computer Science, UCL S.Julier@cs.ucl.ac.uk http://moodle.ucl.ac.uk/course/view.php?id=11547 GV01 - Mathematical Methods, Algorithms and

More information

LINEAR PROGRAMMING: A GEOMETRIC APPROACH. Copyright Cengage Learning. All rights reserved.

LINEAR PROGRAMMING: A GEOMETRIC APPROACH. Copyright Cengage Learning. All rights reserved. 3 LINEAR PROGRAMMING: A GEOMETRIC APPROACH Copyright Cengage Learning. All rights reserved. 3.1 Graphing Systems of Linear Inequalities in Two Variables Copyright Cengage Learning. All rights reserved.

More information

Math 182. Assignment #4: Least Squares

Math 182. Assignment #4: Least Squares Introduction Math 182 Assignment #4: Least Squares In any investigation that involves data collection and analysis, it is often the goal to create a mathematical function that fits the data. That is, a

More information

Some Advanced Topics in Linear Programming

Some Advanced Topics in Linear Programming Some Advanced Topics in Linear Programming Matthew J. Saltzman July 2, 995 Connections with Algebra and Geometry In this section, we will explore how some of the ideas in linear programming, duality theory,

More information

Discrete Optimization. Lecture Notes 2

Discrete Optimization. Lecture Notes 2 Discrete Optimization. Lecture Notes 2 Disjunctive Constraints Defining variables and formulating linear constraints can be straightforward or more sophisticated, depending on the problem structure. The

More information

CHAPTER 6 Parametric Spline Curves

CHAPTER 6 Parametric Spline Curves CHAPTER 6 Parametric Spline Curves When we introduced splines in Chapter 1 we focused on spline curves, or more precisely, vector valued spline functions. In Chapters 2 and 4 we then established the basic

More information

CS1114 Assignment 5, Part 1

CS1114 Assignment 5, Part 1 CS4 Assignment 5, Part out: Friday, March 27, 2009. due: Friday, April 3, 2009, 5PM. This assignment covers three topics in two parts: interpolation and image transformations (Part ), and feature-based

More information

Lecture 5. If, as shown in figure, we form a right triangle With P1 and P2 as vertices, then length of the horizontal

Lecture 5. If, as shown in figure, we form a right triangle With P1 and P2 as vertices, then length of the horizontal Distance; Circles; Equations of the form Lecture 5 y = ax + bx + c In this lecture we shall derive a formula for the distance between two points in a coordinate plane, and we shall use that formula to

More information

Fractal Image Compression on a Pseudo Spiral Architecture

Fractal Image Compression on a Pseudo Spiral Architecture Fractal Image Compression on a Pseudo Spiral Huaqing Wang, Meiqing Wang, Tom Hintz, Xiangjian He, Qiang Wu Faculty of Information Technology, University of Technology, Sydney PO Box 123, Broadway 2007,

More information

In this chapter, we will investigate what have become the standard applications of the integral:

In this chapter, we will investigate what have become the standard applications of the integral: Chapter 8 Overview: Applications of Integrals Calculus, like most mathematical fields, began with trying to solve everyday problems. The theory and operations were formalized later. As early as 70 BC,

More information

MATH 54 - LECTURE 10

MATH 54 - LECTURE 10 MATH 54 - LECTURE 10 DAN CRYTSER The Universal Mapping Property First we note that each of the projection mappings π i : j X j X i is continuous when i X i is given the product topology (also if the product

More information

Background for Surface Integration

Background for Surface Integration Background for urface Integration 1 urface Integrals We have seen in previous work how to define and compute line integrals in R 2. You should remember the basic surface integrals that we will need to

More information

EXTREME POINTS AND AFFINE EQUIVALENCE

EXTREME POINTS AND AFFINE EQUIVALENCE EXTREME POINTS AND AFFINE EQUIVALENCE The purpose of this note is to use the notions of extreme points and affine transformations which are studied in the file affine-convex.pdf to prove that certain standard

More information

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture - 35 Quadratic Programming In this lecture, we continue our discussion on

More information